Complex Event Processing


Fuzzy Rule based Intelligent Cardiovascular Disease Prediction using Complex Event Processing

Kumar, Shashi Shekhar, Harsh, Anurag, Chandra, Ritesh, Agarwal, Sonali

arXiv.org Artificial Intelligence

Cardiovascular diseases (CVDs) are a rapidly rising global concern driven by unhealthy diets, lack of physical activity, and other factors. According to the World Health Organization (WHO), primary risk factors include elevated blood pressure, glucose, blood lipids, and obesity. Recent research has focused on accurate and timely disease prediction to reduce risk and fatalities, often relying on predictive models trained on large datasets, which require intensive training. An intelligent system for CVD patients could greatly assist in making informed decisions by effectively analyzing health parameters. Complex Event Processing (CEP) has emerged as a valuable method for solving real-time challenges by aggregating patterns of interest and their causes and effects on end users. In this work, we propose a fuzzy rule-based system for monitoring clinical data to provide real-time decision support. We designed fuzzy rules based on clinical and WHO standards to ensure accurate predictions. Our integrated approach uses Apache Kafka and Spark for data streaming, and the Siddhi CEP engine for event processing. Additionally, we pass numerous cardiovascular disease-related parameters through the CEP engine to ensure fast and reliable prediction decisions. To validate the effectiveness of our approach, we simulated real-time, unseen data to predict cardiovascular disease. Using synthetic data (1000 samples), we categorized the samples into "Very Low Risk, Low Risk, Medium Risk, High Risk, and Very High Risk." Validation results showed that 20% of samples were categorized as very low risk, 15-45% as low risk, 35-65% as medium risk, 55-85% as high risk, and 75% as very high risk.
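The fuzzy categorization the abstract describes can be illustrated with a small sketch. This is not the paper's rule base: the two input vitals, the membership breakpoints, and the score cut-offs below are all hypothetical, chosen only to show how triangular memberships and Mamdani-style rules map readings to the five risk labels.

```python
# Illustrative fuzzy risk scoring. All thresholds are hypothetical,
# not the paper's rules or WHO cut-offs.

def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def risk_score(systolic_bp, glucose):
    """Combine fuzzy memberships of two vitals into a crude [0, 1] risk score."""
    high_bp = triangular(systolic_bp, 120, 160, 200)
    high_glucose = triangular(glucose, 100, 180, 260)
    # Mamdani-style rules: each rule fires with the min of its antecedents,
    # and the overall risk is the max over rule activations.
    return max(min(high_bp, high_glucose), 0.5 * high_bp, 0.5 * high_glucose)

def categorize(score):
    """Map a fuzzy score to the five labels used in the abstract."""
    if score < 0.2:
        return "Very Low Risk"
    if score < 0.4:
        return "Low Risk"
    if score < 0.6:
        return "Medium Risk"
    if score < 0.8:
        return "High Risk"
    return "Very High Risk"
```

In the actual system such rules would fire inside the Siddhi CEP engine over the Kafka/Spark stream rather than in plain Python.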


It's a Marketing Mess! Artificial Intelligence vs Machine Learning

#artificialintelligence

There are many types of analytics used in the security world; some are defined by vendors, others by analysts. Let's begin by using the Gartner analytics maturity curve as a model for the list, with the insertion of one additional term slotted into the middle of the curve: Behavioral Analytics. Descriptive Analytics (Gartner): Descriptive Analytics is the examination of data or content, usually performed manually, to answer the question "What happened?" Baikalov explains that Descriptive Analytics is the realm of a SIEM (Security Information and Event Management system) like ArcSight: "these systems gather and correlate all log data and report on known bad activities." Diagnostic Analytics (Gartner): Diagnostic Analytics is a form of advanced analytics that examines data or content to answer the question "Why did it happen?"


Using DeepProbLog to perform Complex Event Processing on an Audio Stream

Vilamala, Marc Roig, Xing, Tianwei, Taylor, Harrison, Garcia, Luis, Srivastava, Mani, Kaplan, Lance, Preece, Alun, Kimmig, Angelika, Cerutti, Federico

arXiv.org Artificial Intelligence

In this paper, we present an approach to Complex Event Processing (CEP) that is based on DeepProbLog. This approach has the following objectives: (i) allowing the use of subsymbolic data as an input, (ii) retaining the flexibility and modularity on the definitions of complex event rules, (iii) allowing the system to be trained in an end-to-end manner and (iv) being robust against noisily labelled data. Our approach makes use of DeepProbLog to create a neuro-symbolic architecture that combines a neural network to process the subsymbolic data with a probabilistic logic layer to allow the user to define the rules for the complex events. We demonstrate that our approach is capable of detecting complex events from an audio stream. We also demonstrate that our approach is capable of training even with a dataset that has a moderate proportion of noisy data.
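The division of labor the abstract describes, a neural layer producing probabilities for simple events and a probabilistic-logic layer combining them into a complex event, can be illustrated with plain probability calculus. This is not DeepProbLog syntax, and the noisy-OR rule below is an assumed example rule, not the paper's.

```python
# Sketch of the neuro-symbolic idea: a neural network would output a
# probability of the simple event "gunshot" for each audio frame, and a
# probabilistic-logic rule combines them. Here the (assumed) rule is:
# the complex event holds if at least one frame contains a gunshot,
# treating frames as independent (noisy-OR).

def p_complex_event(p_gunshot_frames):
    """Probability that any frame contains a gunshot."""
    p_none = 1.0
    for p in p_gunshot_frames:
        p_none *= (1.0 - p)
    return 1.0 - p_none
```

Because the combination is differentiable, gradients can flow through it back into the neural network, which is what enables the end-to-end training objective (iii).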


The Synergy of Complex Event Processing and Tiny Machine Learning in Industrial IoT

Ren, Haoyu, Anicic, Darko, Runkler, Thomas

arXiv.org Artificial Intelligence

Focusing on comprehensive networking, big data, and artificial intelligence, the Industrial Internet-of-Things (IIoT) facilitates efficiency and robustness in factory operations. Various sensors and field devices play a central role, as they generate a vast amount of real-time data that can provide insights into manufacturing. The synergy of complex event processing (CEP) and machine learning (ML) has been developed actively in the last years in IIoT to identify patterns in heterogeneous data streams and fuse raw data into tangible facts. In a traditional compute-centric paradigm, the raw field data are continuously sent to the cloud and processed centrally. As IIoT devices become increasingly pervasive and ubiquitous, concerns are raised since transmitting such amount of data is energy-intensive, vulnerable to be intercepted, and subjected to high latency. The data-centric paradigm can essentially solve these problems by empowering IIoT to perform decentralized on-device ML and CEP, keeping data primarily on edge devices and minimizing communications. However, this is no mean feat because most IIoT edge devices are designed to be computationally constrained with low power consumption. This paper proposes a framework that exploits ML and CEP's synergy at the edge in distributed sensor networks. By leveraging tiny ML and micro CEP, we shift the computation from the cloud to the power-constrained IIoT devices and allow users to adapt the on-device ML model and the CEP reasoning logic flexibly on the fly without requiring to reupload the whole program. Lastly, we evaluate the proposed solution and show its effectiveness and feasibility using an industrial use case of machine safety monitoring.


A Hybrid Neuro-Symbolic Approach for Complex Event Processing

Vilamala, Marc Roig, Taylor, Harrison, Xing, Tianwei, Garcia, Luis, Srivastava, Mani, Kaplan, Lance, Preece, Alun, Kimmig, Angelika, Cerutti, Federico

arXiv.org Artificial Intelligence

Imagine a scenario where we are trying to detect a shooting using microphones deployed in a city: shooting is a situation of interest that we want to identify from a high-throughput (audio) data stream. Complex Event Processing (CEP) is a type of approach aimed at detecting such situations of interest, called complex events, from a data stream using a set of rules. These rules are defined on atomic pieces of information from the data stream, which we call events--or simple events, for clarity. Complex events can be formed from multiple simple events. For instance, shooting might start when multiple instances of the simple event gunshot occur. For simplicity, we can assume that when we start to detect siren events, authorities have arrived and the situation is being dealt with, which would conclude the complex event. Using the raw data stream implies that we usually cannot directly write declarative rules on that data, as it would require processing the raw data with symbolic rules; though theoretically possible, this is hardly recommended. Using a machine learning algorithm such as a neural network trained with back-propagation is also infeasible, as it would need to simultaneously learn to understand the simple events within the data stream and the interrelationships between such events that compose a complex event. While possible, the sparsity of data makes this a hard problem to solve.
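The start/end rule described above can be sketched as a small state machine over already-recognized simple events. The event names follow the example; the threshold of two gunshots is an illustrative assumption, and in the paper's setting the simple events would come from a neural network rather than arriving as clean symbols.

```python
# Toy CEP rule for the "shooting" complex event: it starts after repeated
# "gunshot" simple events and ends at the first "siren". The threshold of
# 2 gunshots is an assumption for illustration.

def detect_complex_events(stream, start_threshold=2):
    """Yield (start_index, end_index) spans of the complex event."""
    gunshots = 0
    start = None
    for i, event in enumerate(stream):
        if event == "gunshot":
            gunshots += 1
            if start is None and gunshots >= start_threshold:
                start = i  # complex event begins
        elif event == "siren" and start is not None:
            yield (start, i)  # authorities arrived: event concludes
            start, gunshots = None, 0
```

For example, the stream `["noise", "gunshot", "gunshot", "noise", "siren"]` yields the single span `(2, 4)`.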


Traffic Prediction Framework for OpenStreetMap using Deep Learning based Complex Event Processing and Open Traffic Cameras

Yadav, Piyush, Sarkar, Dipto, Salwala, Dhaval, Curry, Edward

arXiv.org Artificial Intelligence

Displaying near-real-time traffic information is a useful feature of digital navigation maps. However, most commercial providers rely on privacy-compromising measures such as deriving location information from cellphones to estimate traffic. The lack of an open-source traffic estimation method using open data platforms is a bottleneck for building sophisticated navigation services on top of OpenStreetMap (OSM). We propose a deep learning-based Complex Event Processing (CEP) method that relies on publicly available video camera streams for traffic estimation. The proposed framework performs near-real-time object detection and object property extraction across camera clusters in parallel to derive multiple measures related to traffic, with the results visualized on OpenStreetMap. The estimation of object properties (e.g. vehicle speed, count, direction) provides multidimensional data that can be leveraged to create metrics and visualizations for congestion beyond commonly used density-based measures. Our approach couples both flow and count measures during interpolation by considering each vehicle as a sample point and its speed as weight. We demonstrate multidimensional traffic metrics (e.g. flow rate, congestion estimation) over OSM by processing 22 traffic cameras from London streets. The system achieves a near-real-time performance of 1.42 seconds median latency and an average F-score of 0.80.
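The coupling of count and speed measures rests on the classical traffic relation q = k * v (flow = density x speed). The sketch below illustrates that relation for one road segment from per-vehicle speed samples; it is an assumption about the general idea, not the framework's implementation.

```python
# Illustrative per-segment traffic metrics from detected vehicles.
# speeds_kmh: one speed sample per detected vehicle on the segment.

def segment_metrics(speeds_kmh, segment_km):
    """Return count, mean speed, and a flow-style rate for one segment."""
    count = len(speeds_kmh)
    if count == 0:
        return {"count": 0, "mean_speed": 0.0, "flow": 0.0}
    mean_speed = sum(speeds_kmh) / count  # km/h
    density = count / segment_km          # vehicles per km
    flow = density * mean_speed           # vehicles per hour (q = k * v)
    return {"count": count, "mean_speed": mean_speed, "flow": flow}
```

Metrics of this kind, computed per camera cluster, are what gets interpolated along the road network and rendered over OSM.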


A methodology for solving problems with DataScience for Internet of Things - Part One

@machinelearnbot

Real-time systems differ in the way they perform analytics. Specifically, real-time systems perform analytics over short time windows on data streams. Hence, the scope of real-time analytics is a 'window', which typically comprises the last few time slots. Making predictions on real-time data streams involves building an offline model and applying it to the stream. Models incorporate one or more machine learning algorithms that are trained using the training data.
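The pattern described here, an offline-trained model applied to a short sliding window over a stream, can be sketched as follows. The moving-average "model" and its threshold are stand-ins for a real trained model, chosen only to keep the example self-contained.

```python
# Sliding-window scoring of a stream: the window is the scope of the
# analytics, and the "offline model" (here a moving-average threshold,
# a stand-in for a trained model) is applied to each window.
from collections import deque

def stream_predictions(stream, window_size=3, threshold=10.0):
    """Yield one boolean prediction per reading once the window is full."""
    window = deque(maxlen=window_size)
    for reading in stream:
        window.append(reading)
        if len(window) == window_size:
            # Apply the "model" to just the current window of time slots.
            yield sum(window) / window_size > threshold
```

Swapping in a real model means training it offline on historical windows, then calling its predict function on each live window in the loop above.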

